6. Compatibilities and Dependencies

This chapter describes the compatibility issues that users should
consider when purchasing MPT version 1.4 and later.  Text that applies
to IRIX systems only is marked "IRIX only."

6.1 MPI/Speedshop (IRIX only)

For users running Speedshop with MPI jobs on systems running IRIX
6.5.1 or greater, it is no longer necessary to set the
MPI_RLD_HACK_OFF environment variable.

6.2 Array Services Change (IRIX only)

The default authentication for array services changed from
AUTHENTICATION NONE to AUTHENTICATION NOREMOTE in array services
release 3.3.1.  For more information, see advisory 19990701-01-p at
the following URL:

     http://www.sgi.com/Support/security/security.html

Users who run MPI jobs over multiple hosts should use
AUTHENTICATION SIMPLE.

6.3 C++ Bindings

In MPT release 1.4.0.1, the C++ bindings were included in the
libmpi.so file.  With the 1.4.0.3 release, the C++ bindings reside in
the libmpi++.so file.  For this reason, C++ users who have used MPT
1.4.0.1 need to relink their programs with the -lmpi++ option, as in
the following example:

     CC -32 compute.C -lmpi++ -lmpi

6.4 GM Dependencies

A version of GM 1.4 for IRIX is available from SGI.  It can be
obtained at the following URL:

     http://www.sgi.com/products/evaluation/6.5.13_myrinet_1.0.1

6.5 Propack Dependencies

Propack version 1.3 or higher is recommended with the MPT 1.4.0.3
release.

6.6 Job Limits

To run MPI jobs on IRIX with job limits enabled, your system should be
running IRIX 6.5.9f or greater.  Because of the large number of file
descriptors that MPI jobs require, you should increase the jlimit
"files" limit for MPI jobs.  For more information on increasing job
limits, see the jlimit(1) man page or IRIX Admin: Resource
Administration.  You can access this document from the SGI Technical
Publications Library at the following URL:

     http://techpubs.sgi.com

On Linux, because of the large number of file descriptors that MPI
jobs require, you might need to increase the system-wide limit on the
number of open files.  You can do this by executing the following
command as root on each node of your cluster:

     echo 16834 > /proc/sys/fs/file-max

The 16834 value allows for approximately 256 MPI processes; the
boot-time setting is 4096.  You can choose a different value to suit
your needs.  The number of file descriptors used by an MPI job
increases with the number of processes that job is using.
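The echo command above takes effect immediately but does not survive a
reboot.  The following is a minimal sketch of one way to apply and
persist the setting across a cluster; the node-list file, the use of
rsh, and the assumption that each node reads /etc/sysctl.conf at boot
are illustrative assumptions, not part of MPT:

     #!/bin/sh
     # Sketch: raise the open-file limit now, and record it for the
     # next boot, on every node named in a (hypothetical) node file.
     LIMIT=16834                        # roughly 256 MPI processes
     for node in `cat /etc/cluster-nodes`; do
         rsh $node "echo $LIMIT > /proc/sys/fs/file-max; \
                    echo fs.file-max = $LIMIT >> /etc/sysctl.conf"
     done

Rerunning the script appends duplicate lines to /etc/sysctl.conf; the
last fs.file-max entry is the one applied, so this is harmless but
untidy.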
You can use the following table as a guide:

     Number of processes     Number of files
     16                      500
     32                      1k
     64                      2k
     128                     4k
     256                     8k
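The figures above work out to roughly 32 file descriptors per MPI
process (an inference from the table, not a documented formula).  As a
rough sketch, a helper along these lines could estimate a system-wide
value for other job sizes, leaving about the same headroom that the
16834 example leaves for 256 processes:

     #!/bin/sh
     # Sketch: estimate fs.file-max for an MPI job of N processes.
     # The 32-descriptors-per-process ratio is read off the table above.
     NPROCS=${1:-256}
     PERJOB=`expr $NPROCS \* 32`          # per-job descriptor estimate
     echo "per-job descriptors: $PERJOB"
     echo "suggested fs.file-max: `expr $PERJOB \* 2`"   # ~2x headroom

For 256 processes this prints 16384, close to the 16834 value used
earlier.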